296 research outputs found

    Switched networks with maximum weight policies: Fluid approximation and multiplicative state space collapse

    We consider a queueing network in which there are constraints on which queues may be served simultaneously; such networks may be used to model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time. We consider a family of scheduling policies, related to the maximum-weight policy of Tassiulas and Ephremides [IEEE Trans. Automat. Control 37 (1992) 1936–1948], for single-hop and multihop networks. We specify a fluid model and show that fluid-scaled performance processes can be approximated by fluid model solutions. We study the behavior of fluid model solutions under critical load, and characterize invariant states as those states which solve a certain network-wide optimization problem. We use fluid model results to prove multiplicative state space collapse. A notable feature of our results is that they do not assume complete resource pooling.
    Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/11-AAP759
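
    To make the policy family concrete, here is a minimal sketch of a max-weight-style scheduler: at each decision epoch it serves the feasible schedule maximizing the sum of (a power of) the queue lengths it serves. The explicit schedule list, the weight exponent alpha, and the 2x2 switch example are illustrative assumptions, not the paper's exact formulation.

# Hypothetical sketch: pick the feasible 0/1 service vector with maximum weighted queue sum.
def max_weight_schedule(queue_lengths, feasible_schedules, alpha=1.0):
    """Return the schedule s maximizing sum_i q_i**alpha * s_i (illustrative MW-alpha rule)."""
    return max(feasible_schedules,
               key=lambda s: sum((q ** alpha) * si for q, si in zip(queue_lengths, s)))

# Toy 2x2 input-queued switch: queues are virtual output queues
# (in1->out1, in1->out2, in2->out1, in2->out2); a feasible schedule is a matching.
queues = [5, 1, 2, 7]
matchings = [(1, 0, 0, 1), (0, 1, 1, 0)]
print(max_weight_schedule(queues, matchings))  # -> (1, 0, 0, 1), serving the two longest queues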

    Inferring Rankings Using Constrained Sensing

    We consider the problem of recovering a function over the space of permutations (that is, the symmetric group) over $n$ elements from given partial information; the partial information we consider is related to the group-theoretic Fourier transform of the function. This problem naturally arises in several settings such as ranked elections, multi-object tracking, ranking systems, and recommendation systems. Inspired by the work of Donoho and Stark in the context of discrete-time functions, we focus on non-negative functions with a sparse support (support size $\ll$ domain size). Our recovery method is based on finding the sparsest solution (through $\ell_0$ optimization) that is consistent with the available information. As the main result, we derive sufficient conditions under which a function can be recovered exactly from partial information through $\ell_0$ optimization. Under a natural random model for the generation of functions, we quantify the recoverability conditions by deriving bounds on the sparsity (support size) for which the function satisfies the sufficient conditions with high probability as $n \to \infty$. Since $\ell_0$ optimization is computationally hard, the popular compressive sensing literature considers solving the convex relaxation, $\ell_1$ optimization, to find the sparsest solution. However, we show that $\ell_1$ optimization fails to recover a function (even with constant sparsity) generated using the random model with high probability as $n \to \infty$. To overcome this problem, we propose a novel iterative algorithm for the recovery of functions that satisfy the sufficient conditions. Finally, using an information-theoretic framework, we study necessary conditions for exact recovery to be possible.
    Comment: 19 pages
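
    As a toy illustration of the $\ell_1$ relaxation mentioned above, the sketch below recovers a non-negative function on $S_n$ from its first-order marginal matrix (one Fourier component), assuming that form of partial information, a small $n$, and scipy's generic LP solver; the abstract's point is precisely that this relaxation can fail on the random model where the $\ell_0$ formulation and the proposed iterative algorithm succeed.

# Hypothetical sketch: l1-minimal non-negative solution consistent with first-order marginals.
from itertools import permutations
import numpy as np
from scipy.optimize import linprog

n = 4
perms = list(permutations(range(n)))  # the n! elements of the symmetric group S_n

# Linear map from f (indexed by permutations) to its first-order marginals
# M[i, j] = sum of f(sigma) over sigma with sigma(i) = j.
A = np.zeros((n * n, len(perms)))
for k, sigma in enumerate(perms):
    for i in range(n):
        A[i * n + sigma[i], k] = 1.0

# A sparse, non-negative ground truth supported on two permutations.
f_true = np.zeros(len(perms))
f_true[perms.index((1, 0, 2, 3))] = 0.7
f_true[perms.index((0, 1, 3, 2))] = 0.3
b = A @ f_true

# With f >= 0, minimizing ||f||_1 subject to A f = b is the linear program min sum(f).
res = linprog(c=np.ones(len(perms)), A_eq=A, b_eq=b, bounds=(0, None))
print("recovered support:", np.nonzero(res.x > 1e-6)[0], "true support:", np.nonzero(f_true)[0])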

    Finding Rumor Sources on Random Trees

    We consider the problem of detecting the source of a rumor which has spread in a network using only observations about which set of nodes is infected with the rumor, and with no information as to when these nodes became infected. In recent work by Shah and Zaman, this rumor source detection problem was introduced and studied. The authors proposed the graph score function rumor centrality as an estimator for detecting the source, and established it to be the maximum likelihood estimator with respect to the popular Susceptible-Infected (SI) model with exponential spreading times for regular trees. They showed that as the size of the infected graph increases, for a path graph (2-regular tree) the probability of source detection goes to $0$, while for $d$-regular trees with $d \geq 3$ the probability of detection, say $\alpha_d$, remains bounded away from $0$ and is less than $1/2$. However, their results stop short of providing insights into the performance of the rumor centrality estimator in more general settings such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between a continuous time branching process and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all previous results as a special case and obtain a variety of novel results, including the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution.
    Comment: 38 pages, 6 figures
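
    A minimal sketch of the rumor centrality score is given below, assuming the standard closed form R(v) = N! / prod_u t(u, v), where t(u, v) is the size of the subtree rooted at u when the infected tree is rooted at the candidate source v; the source estimate is the node maximizing this score. The toy star graph is an illustrative assumption.

# Hypothetical sketch of the rumor centrality score and the induced source estimator.
from math import factorial

def rumor_centrality(tree, v):
    """tree: dict mapping each node to its list of neighbours (an undirected tree)."""
    sizes = {}

    def fill(node, parent):
        size = 1
        for nbr in tree[node]:
            if nbr != parent:
                size += fill(nbr, node)
        sizes[node] = size
        return size

    fill(v, None)                            # subtree sizes with the tree rooted at v
    product = 1
    for node in tree:
        product *= sizes[node]
    return factorial(len(tree)) // product   # number of infection orderings that start at v

def estimate_source(tree):
    """Pick the node with the largest rumor centrality (the rumor center)."""
    return max(tree, key=lambda v: rumor_centrality(tree, v))

# Toy infected tree: a star with centre 0; the centre is the estimated source.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(estimate_source(star))  # -> 0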

    Q-learning with Nearest Neighbors

    We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path under an arbitrary policy of the system is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}\big(1/\varepsilon^d\big)$, so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big)$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
    Comment: Accepted to NIPS 2018
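
    The sketch below shows a nearest-neighbour flavoured Q-learning update driven by a single sample path, assuming a fixed finite set of representative states standing in for an epsilon-net of the continuous state space; the constant step size and the simple one-centre update are illustrative simplifications rather than the paper's exact NNQL algorithm.

# Hypothetical sketch: tabular Q-learning on representative states, with each observed
# state mapped to its nearest centre.
import numpy as np

def nearest(centres, s):
    """Index of the centre closest to state s; centres has shape (num_centres, state_dim)."""
    return int(np.argmin(np.linalg.norm(centres - s, axis=1)))

def nearest_neighbor_q_learning(path, centres, num_actions, gamma=0.9, eta=0.1):
    """path: iterable of (state, action, reward, next_state) tuples from one trajectory."""
    Q = np.zeros((len(centres), num_actions))
    for s, a, r, s_next in path:
        i, j = nearest(centres, s), nearest(centres, s_next)
        target = r + gamma * Q[j].max()           # one-step Bellman target at the nearest centre
        Q[i, a] = (1 - eta) * Q[i, a] + eta * target
    return Q

# Usage idea: centres could be a uniform grid over [0, 1]^d with spacing epsilon, and path a
# trajectory generated by an arbitrary behaviour policy (e.g. uniformly random actions).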

    Belief propagation: an asymptotically optimal algorithm for the random assignment problem

    The random assignment problem asks for the minimum-cost perfect matching in the complete $n \times n$ bipartite graph $K_{n,n}$ with i.i.d. edge weights, say uniform on $[0,1]$. In a remarkable work by Aldous (2001), the optimal cost was shown to converge to $\zeta(2)$ as $n \to \infty$, as conjectured by Mézard and Parisi (1987) through the so-called cavity method. The latter also suggested a non-rigorous decentralized strategy for finding the optimum, which turned out to be an instance of the Belief Propagation (BP) heuristic discussed by Pearl (1987). In this paper we use the objective method to analyze the performance of BP as the size of the underlying graph becomes large. Specifically, we establish that the dynamics of BP on $K_{n,n}$ converge in distribution as $n \to \infty$ to appropriately defined dynamics on the Poisson Weighted Infinite Tree, and we then prove correlation decay for these limiting dynamics. As a consequence, we obtain that BP finds an asymptotically correct assignment in only $O(n^2)$ time. This contrasts with both the worst-case upper bound for convergence of BP derived by Bayati, Shah and Sharma (2005) and the best-known computational cost of $\Theta(n^3)$ achieved by Edmonds and Karp's algorithm (1972).
    Comment: Mathematics of Operations Research (2009)
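
    To illustrate the kind of decentralized dynamics being analyzed, here is a min-sum style sketch of BP for the assignment problem with i.i.d. uniform costs; the particular simplified message update, the synchronous schedule, and the fixed iteration count with no convergence check are illustrative assumptions rather than the exact dynamics studied in the paper.

# Hypothetical min-sum sketch: each side repeatedly reports its best "reduced cost" elsewhere.
import numpy as np

def bp_assignment(C, iterations=100):
    """C: n x n cost matrix; returns, for each row, its preferred column after message passing."""
    n = C.shape[0]
    a = np.zeros((n, n))  # a[i, j]: message from row i to column j
    b = np.zeros((n, n))  # b[j, i]: message from column j to row i
    for _ in range(iterations):
        a_new = np.empty_like(a)
        b_new = np.empty_like(b)
        for i in range(n):
            for j in range(n):
                a_new[i, j] = min(C[i, k] - b[k, i] for k in range(n) if k != j)
                b_new[j, i] = min(C[l, j] - a[l, j] for l in range(n) if l != i)
        a, b = a_new, b_new
    # Read off each row's choice from the final messages (no convergence check here).
    return [int(np.argmin([C[i, j] - b[j, i] for j in range(n)])) for i in range(n)]

# Random assignment instance with i.i.d. uniform [0, 1] edge costs.
rng = np.random.default_rng(0)
C = rng.uniform(size=(10, 10))
print(bp_assignment(C))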